Generalized Hopfield Networks and Nonlinear Optimization

Authors

  • Gintaras V. Reklaitis
  • Athanasios G. Tsirukis
  • Manoel Fernando Tenorio (Dept. of Electrical Engineering, Purdue University, W. Lafayette, IN 47907; to whom correspondence should be addressed)
Abstract

A nonlinear neural framework, called the Generalized Hopfield Network (GHN), is proposed, which is able to solve in a parallel distributed manner systems of nonlinear equations. The method is applied to the general nonlinear optimization problem. We demonstrate GHNs implementing the three most important optimization algorithms, namely the Augmented Lagrangian, Generalized Reduced Gradient and Successive Quadratic Programming methods. The study results in a dynamic view of the optimization problem and offers a straightforward model for the parallelization of the optimization computations, thus significantly extending the practical limits of problems that can be formulated as an optimization problem and that can gain from the introduction of nonlinearities in their structure (e.g. pattern recognition, supervised learning, design of content-addressable memories).

1 RELATED WORK

The ability of networks of highly interconnected simple nonlinear analog processors (neurons) to solve complicated optimization problems was demonstrated in a series of papers by Hopfield and Tank (Hopfield, 1984), (Tank, 1986). The Hopfield computational model is almost exclusively applied to the solution of combinatorially complex linear decision problems (e.g. the Traveling Salesman Problem). Unfortunately, such problems cannot be solved with guaranteed quality (Bruck, 1987); the network gets trapped in locally optimal solutions. Jeffrey and Rossner (Jeffrey, 1986) extended Hopfield's technique to the nonlinear unconstrained optimization problem, using Cauchy dynamics. Kennedy and Chua (Kennedy, 1988) presented an analog implementation of a network solving a nonlinear optimization problem. The underlying optimization algorithm is a simple transformation method (Reklaitis, 1983), which is known to be relatively inefficient for large nonlinear optimization problems.

2 LINEAR HOPFIELD NETWORK (LHN)

The computation in a Hopfield network is done by a collection of highly interconnected simple neurons. Each processing element, i, is characterized by its activation level, U_i, which is a function of the input received from the external environment, I_i, and the state of the other neurons. The activation level of i is transmitted to the other processors after passing through a filter that converts U_i to a 0-1 binary value, V_i. The time behavior of the system is described by the following model:

\[ \frac{dU_i}{dt} = \sum_j T_{ij} V_j - \frac{U_i}{R_i} + I_i \]

where T_{ij} are the interconnection strengths. The network is characterized as linear because the neuron inputs appear linearly in the neuron's constitutive equation. The steady state of a Hopfield network corresponds to a local minimum of the corresponding quadratic Lyapunov function:

\[ E = -\frac{1}{2} \sum_i \sum_j T_{ij} V_i V_j - \sum_i I_i V_i + \sum_i \frac{1}{R_i} \int_0^{V_i} g_i^{-1}(V) \, dV \]

If the matrix [T_{ij}] is symmetric, the steady-state values of V_i are binary. These observations turn the Hopfield network into a very useful discrete optimization tool. Nonetheless, the linear structure poses two major limitations: the Lyapunov (objective) function can only take a quadratic form, and the feasible region can only have a hypercube geometry (-1 \le V_i \le 1). Therefore, the Linear Hopfield Network is limited to optimization problems with a quadratic objective function and linear constraints. The general nonlinear optimization problem requires arbitrarily nonlinear neural interactions.
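
To make these dynamics concrete, the following is a minimal simulation sketch (not the authors' code), assuming a sigmoid filter g with adjustable gain and explicit Euler integration; the network size, weights, and all parameter values are illustrative.

import numpy as np

def simulate_hopfield(T, I, R=1.0, dt=0.01, steps=5000, gain=5.0):
    """Integrate dU_i/dt = sum_j T_ij V_j - U_i/R + I_i with V = g(U)."""
    U = np.zeros(len(I))
    for _ in range(steps):
        V = 1.0 / (1.0 + np.exp(-gain * U))   # filter g: squashes U into (0, 1)
        dU = T @ V - U / R + I                # the neuron constitutive equation
        U += dt * dU                          # explicit Euler step
    return 1.0 / (1.0 + np.exp(-gain * U))

def energy(T, I, V):
    """Quadratic part of the Lyapunov function E (integral term omitted)."""
    return -0.5 * V @ T @ V - I @ V

# Symmetric interconnection matrix, as the convergence result requires.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
T = (A + A.T) / 2
I = rng.normal(size=4)
V = simulate_hopfield(T, I)
print("steady-state V:", np.round(V, 3), " E =", round(energy(T, I, V), 3))

In the high-gain limit the sigmoid approaches the 0-1 filter described above, and for symmetric T the Lyapunov function E decreases along trajectories until the network settles in a local minimum.
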
3 THE NONLINEAR OPTIMIZATION PROBLEM

The general nonlinear optimization problem consists of a search for the values of the independent variables x_i optimizing a multivariable objective function, so that some conditions (equality, h_i, and inequality, g_j, constraints) are satisfied at the optimum:

\[
\begin{aligned}
\text{optimize} \quad & f(x_1, x_2, \ldots, x_N) \\
\text{subject to} \quad & h_i(x_1, x_2, \ldots, x_N) = 0, \quad & i = 1, 2, \ldots, K, \; K < N \\
& a_j \le g_j(x_1, x_2, \ldots, x_N) \le b_j, \quad & j = 1, 2, \ldots, M \\
& x_k^L \le x_k \le x_k^U, \quad & k = 1, 2, \ldots, N
\end{aligned}
\]

The influence of the constraint geometry on the shape of the objective function is described in a unified manner by the Lagrangian Function:
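
The formula itself is truncated in this copy. Assuming the conventional definition for the equality-constrained case (as in standard texts, e.g. Reklaitis, 1983), the Lagrangian takes the form

\[ L(x, \lambda) = f(x) + \sum_{i=1}^{K} \lambda_i h_i(x) \]

where the \lambda_i are the Lagrange multipliers; stationary points of L in both x and \lambda satisfy the first-order optimality conditions of the constrained problem.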

Similar Resources

Neural Networks for Solving Quadratic Assignment Problems

Abstract— In this paper the Hopfield neural networks are adopted to solve the quadratic assignment problem, which is a generalization of the traveling salesman problem (TSP), the graph-partitioning problem (GPP), and the matching problem. When the Hopfield neural network was applied alone, a sub-optimal solution was obtained. By adding the 2-exchange we obtained a solution very close to the op...
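
As a hedged illustration of the 2-exchange refinement mentioned above (a generic local search, not necessarily that paper's implementation): starting from an assignment permutation, e.g. one decoded from a Hopfield network's output, repeatedly swap two locations whenever the swap lowers the QAP cost. The flow/distance matrices and the cost convention below are assumptions.

import numpy as np

def qap_cost(F, D, p):
    # QAP objective: sum_ij F[i, j] * D[p[i], p[j]]
    return float((F * D[np.ix_(p, p)]).sum())

def two_exchange(F, D, p):
    p = list(p)
    improved = True
    while improved:
        improved = False
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                q = p[:]
                q[i], q[j] = q[j], q[i]          # swap two facility locations
                if qap_cost(F, D, q) < qap_cost(F, D, p):
                    p, improved = q, True        # keep the improving move
    return p

rng = np.random.default_rng(1)
F = rng.integers(0, 10, (6, 6))
D = rng.integers(0, 10, (6, 6))
p = two_exchange(F, D, rng.permutation(6))
print("locally optimal assignment:", p, " cost:", qap_cost(F, D, p))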

Estimation of Network Reliability for a Fully Connected Network with Unreliable Nodes and Unreliable Edges using Neuro Optimization

In this paper, we attempt to estimate the reliability of a fully connected network of unreliable nodes and unreliable connections (edges) between them. The proliferation of electronic messaging has been witnessed during the last few years. The acute problem of node failure and connection failure is frequently encountered in communication through various types of networks. We know that a ne...

Solving Nonlinear Equations Using Recurrent Neural Networks

Abstract— A class of recurrent neural networks is developed to solve nonlinear equations, which are approximated by a multilayer perceptron (MLP). The recurrent network includes a linear Hopfield network (LHN) and the MLP as building blocks. This network inverts the original MLP using constrained linear optimization and Newton's method for nonlinear systems. The solution of a nonlinear equation ...
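
For reference, here is a minimal sketch of Newton's method for a nonlinear system F(x) = 0, the classical component that abstract names; the finite-difference Jacobian and the example system are illustrative assumptions, not the paper's MLP-based construction.

import numpy as np

def newton_system(F, x0, tol=1e-10, max_iter=50, h=1e-7):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        # Forward-difference Jacobian: J[i, j] approximates dF_i/dx_j.
        J = np.column_stack([(F(x + h * e) - fx) / h for e in np.eye(len(x))])
        x = x - np.linalg.solve(J, fx)           # Newton step: solve J dx = F(x)
    return x

# Example: intersect the circle x^2 + y^2 = 4 with the line y = x.
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[1] - v[0]])
print(newton_system(F, [1.0, 0.5]))              # approx [sqrt(2), sqrt(2)]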

Nonlinear measures: a new approach to exponential stability analysis for Hopfield-type neural networks

In this paper, a new concept called the nonlinear measure is introduced to quantify the stability of nonlinear systems, in a way similar to the matrix measure for the stability of linear systems. Based on the new concept, a novel approach for stability analysis of neural networks is developed. With this approach, a series of new sufficient conditions for global and local exponential stability of Hopfield ...
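
For context, the classical matrix measure that the nonlinear measure generalizes is defined (a standard fact, not quoted from this abstract) as

\[ \mu(A) = \lim_{h \to 0^+} \frac{\|I + hA\| - 1}{h} \]

and \mu(A) < 0 is a sufficient condition for exponential stability of the linear system \dot{x} = Ax, which is the role the nonlinear measure plays for Hopfield-type networks.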

Neuro-Optimizer: A New Artificial Intelligent Optimization Tool and Its Application for Robot Optimal Controller Design

The main objective of this paper is to introduce a new intelligent optimization technique that uses a prediction-correction strategy supported by a recurrent neural network for finding a near-optimal solution of a given objective function. Recently there have been attempts to use artificial neural networks (ANNs) in optimization problems, and some types of ANNs, such as the Hopfield network and Boltzm...


Publication date: 1989